Dataset name: Category choices study 1
Data on human higher-order cognition and learning in a categorization task. Participants classified images of plankton specimens with three features (the plankton's eye, claw, and tail) into one of two classes (species A and species B); that is, a visual supervised classification learning task. In total, 5 different plankton stimuli were shown repeatedly (for images, see Fig. 5 in the paper). On each trial a plankton was randomly drawn from a distribution (Table 1 in the paper); each plankton image corresponded to one feature combination and belonged deterministically to one class. Participants saw the plankton, classified it by pressing a key, and received feedback about its true class (shown as the letter ‘A’ or ‘B’, with a smile emoticon after a correct classification and a frown emoticon otherwise). Participants were instructed to consistently choose the most likely class. The next stimulus then appeared, and learning continued until participants had learned the correct class for each stimulus, defined by (a) having made at most four classification errors within the latest 200 trials (98% of 200 correct) and (b) having selected the most likely class on the last five appearances of each individual plankton stimulus within the random stimulus sequence. Learning was self-paced; the data contain reaction times. Note that which visual feature dimension (eye, claw, tail) in the images corresponded to which feature dimension in the classification, as well as the class labels, were randomized across participants. The data contain the de-randomized feature-class associations.
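The two stopping criteria above can be made concrete with a short sketch. This is an illustration only, not code shipped with the dataset; the trial records and the `correct` flag are hypothetical stand-ins (in the data itself, correctness is coded in the `smiley` column):

```python
# Minimal sketch of the two learning criteria: (a) at most four errors
# within the latest 200 trials, and (b) the most likely class chosen on
# the last five appearances of every individual stimulus.
from collections import defaultdict

def learning_complete(trials, window=200, max_errors=4, streak=5):
    """trials: list of dicts with keys 'e' (stimulus code) and
    'correct' (True if the most likely class was chosen)."""
    # (a) at most `max_errors` errors within the latest `window` trials
    recent = trials[-window:]
    if len(recent) < window:
        return False
    if sum(not t["correct"] for t in recent) > max_errors:
        return False
    # (b) the most likely class chosen on the last `streak`
    # appearances of every individual stimulus
    by_stimulus = defaultdict(list)
    for t in trials:
        by_stimulus[t["e"]].append(t["correct"])
    return all(len(v) >= streak and all(v[-streak:])
               for v in by_stimulus.values())
```

Both criteria must hold simultaneously; a single error streak on one stimulus resets only criterion (b) for that stimulus.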
Temporal Coverage: September to December 2012
Spatial Coverage: Berlin, Germany
Citation: Jarecki, J. B., Meder, B., & Nelson, J. D. (2017). Naive and Robust: Class-Conditional Independence in Human Classification Learning. Cognitive Science, 42(1), 4-42. https://doi.org/10.1111/cogs.12496
Identifier: https://doi.org/10.1111/cogs.12496
Date published: 2024-11-24
Creator:
| name | value |
|---|---|
| @type | Person |
| givenName | Jana B. |
| familyName | Jarecki |
| affiliation | Max-Planck-Institute for Human Development, Berlin (Organization) |
The following JSON-LD can be found by search engines if you share this codebook publicly on the web.
{
"citation": "Jarecki, J. B., Meder, B., & Nelson, J. D. (2017). Naive and\n Robust: Class-Conditional Independence in Human Classification Learning.\n Cognitive Science, 42(1), 4-42. https://doi.org/10.1111/cogs.12496",
"identifier": "https://doi.org/10.1111/cogs.12496",
"creator": {
"@type": "Person",
"givenName": "Jana B.",
"familyName": "Jarecki",
"affiliation": {
"@type": "Organization",
"name": "Max-Planck-Institute for Human Development, Berlin"
}
},
"spatialCoverage": "Berlin, Germany",
"temporalCoverage": "September to December 2012",
"measurementTechnique": "Computerized cognitive learning task",
"funder": "MPI Berlin. Partial funding by grants NE 1713/1-2 to JDN, and ME 3717/2-2 to BM, from the Deutsche Forschungsgemeinschaft (DFG) as part of the priority program 'New Frameworks of Rationality' (SPP 1516).",
"keywords": ["behavioral data", "experimental data", "classification", "class-conditional independence", "learning", "naive Bayes", "Markov property", "Bayesian model", "probabilistic inference", "heuristics", "machine-learning", "human data"],
"name": "Category choices study 1",
"description": "Data on human higher-order cognition and learning in a categorization task. Participants classified images of plankton specimens with three features (the plankton's eye, claw, and tail) into one of two classes (species A and species B); that is, a visual supervised classification learning task. In total, 5 different plankton stimuli were shown repeatedly (for images, see Fig. 5 in the paper). On each trial a plankton was randomly drawn from a distribution (Table 1 in the paper); each plankton image corresponded to one feature combination and belonged deterministically to one class. Participants saw the plankton, classified it by pressing a key, and received feedback about its true class (shown as the letter 'A' or 'B', with a smile emoticon after a correct classification and a frown emoticon otherwise). Participants were instructed to consistently choose the most likely class. The next stimulus then appeared, and learning continued until participants had learned the correct class for each stimulus, defined by (a) having made at most four classification errors within the latest 200 trials (98% of 200 correct) and (b) having selected the most likely class on the last five appearances of each individual plankton stimulus within the random stimulus sequence. Learning was self-paced; the data contain reaction times. Note that which visual feature dimension (eye, claw, tail) in the images corresponded to which feature dimension in the classification, as well as the class labels, were randomized across participants. 
The data contain the de-randomized feature-class associations.\n\n\n## Table of variables\nThis table contains variable names, labels, and number of missing values.\nSee the complete codebook for more.\n\n|name |label | n_missing|\n|:------|:------|---------:|\n|i |Pseudonymized participant ID | 0|\n|t |Trial number per participant; the maximum is not identical across participants | 0|\n|e |The stimulus in the trial: features (numbers) and the associated true category (letter), which participants are instructed to learn | 0|\n|ch |Observed choice given the stimulus, A or B | 0|\n|smiley |Feedback about the correctness of the class choice | 0|\n|tdCh |time interval from this trial's start to the class choice, in deciseconds, i.e. tenths of seconds | 30|\n|tdNex |time interval from this trial's start to the key press to see the next stimulus, in deciseconds, i.e. tenths of seconds | 0|\n|tStart |absolute time of the start of the trial | 0|\n|tCh |time interval from this trial's start to the class choice in milliseconds | 0|\n|tNex |time interval from this trial's start to the key press to see the next stimulus in milliseconds | 0|\n|form |visual (randomized) form in which the features of the stimulus appeared | 0|\n\n### Note\nThis dataset was automatically described using the [codebook R package](https://rubenarslan.github.io/codebook/) (version 0.9.5).",
"datePublished": "2024-11-24",
"@context": "https://schema.org/",
"@type": "Dataset",
"variableMeasured": [
{
"name": "i",
"description": "Pseudonymized participant ID",
"@type": "propertyValue"
},
{
"name": "t",
"description": "Trial number per participant; the maximum is not identical across participants",
"@type": "propertyValue"
},
{
"name": "e",
"description": "The stimulus in the trial, features (numbers) and \n the associated true category (letter), which participants \n are instructed to learn",
"value": "1. 000A,\n2. 001B,\n3. 010B,\n4. 100B,\n5. 111A",
"@type": "propertyValue"
},
{
"name": "ch",
"description": "Observed choice given the stimulus, A or B",
"@type": "propertyValue"
},
{
"name": "smiley",
"description": "Feedback about the correctness of class choice",
"value": "s. smiley, feedback that the correct class was chosen,\nf. frowny, feedback that an incorrect class was chosen",
"maxValue": "s",
"minValue": "f",
"@type": "propertyValue"
},
{
"name": "tdCh",
"description": "time interval from this trial's start to the class choice \n in deciseconds, i.e. tenths of seconds",
"@type": "propertyValue"
},
{
"name": "tdNex",
"description": "time interval from this trial's start to the key press to see the next \n stimulus, in deciseconds, i.e. tenths of seconds",
"@type": "propertyValue"
},
{
"name": "tStart",
"description": "absolute time of the start of the trial",
"@type": "propertyValue"
},
{
"name": "tCh",
"description": "time interval from this trial's start to the class choice in milliseconds",
"@type": "propertyValue"
},
{
"name": "tNex",
"description": "time interval from this trial's start to the key press to see the next stimulus in milliseconds",
"@type": "propertyValue"
},
{
"name": "form",
"description": "visual (randomized) form in which the features of the stimulus appeared",
"value": "1. bfc,\n2. bfo,\n3. bmc,\n4. bmo,\n5. ffc,\n6. ffo,\n7. fmc,\n8. fmo",
"@type": "propertyValue"
}
]
}
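For analysis, the coded columns above can be unpacked with a short sketch. This is illustrative only: which of the three digits in `e` denotes eye, claw, or tail is left open here, since the codebook states only that the features are de-randomized.

```python
def decode_stimulus(e):
    """Split a stimulus code from column `e`, such as '000A', into its
    three binary feature values and the true class letter.  The mapping
    of digit position to eye/claw/tail is not asserted here."""
    return {"features": tuple(int(c) for c in e[:3]), "class": e[3]}

def decisec_to_ms(td):
    """Convert tdCh/tdNex (deciseconds, i.e. tenths of a second) to the
    millisecond scale used by tCh/tNex."""
    return td * 100
```

For example, `decode_stimulus("001B")` yields features `(0, 0, 1)` and class `B`, and a `tdCh` of 12 deciseconds corresponds to 1200 ms on the `tCh` scale.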
Dataset name: Category choices study 2
Data on human higher-order cognition and learning in a categorization task. Participants classified images of plankton specimens with three features (the plankton's eye, claw, and tail) into one of two classes (species A and species B); that is, a visual supervised classification learning task. In total, 5 different plankton stimuli were shown repeatedly (for images, see Fig. 5 in the paper). On each trial a plankton was randomly drawn from a distribution (Table 2 in the paper); each plankton image corresponded to one feature combination and belonged to one class only probabilistically. Participants saw the plankton, classified it by pressing a key, and received feedback about its true class (shown as the letter ‘A’ or ‘B’, with a smile emoticon after a correct classification and a frown emoticon otherwise). Participants were instructed to consistently choose the most likely class. The next stimulus then appeared, and learning continued until participants had learned the correct class for each stimulus, defined by (a) having made at most four classification errors within the latest 200 trials (98% of 200 correct) and (b) having selected the most likely class on the last five appearances of each individual plankton stimulus within the random stimulus sequence. Learning was self-paced; the data contain reaction times. Note that which visual feature dimension (eye, claw, tail) in the images corresponded to which feature dimension in the classification, as well as the class labels, were randomized across participants. The data contain the de-randomized feature-class associations.
Temporal Coverage: September to December 2012
Spatial Coverage: Berlin, Germany
Citation: Jarecki, J. B., Meder, B., & Nelson, J. D. (2017). Naive and Robust: Class-Conditional Independence in Human Classification Learning. Cognitive Science, 42(1), 4-42. https://doi.org/10.1111/cogs.12496
Identifier: https://doi.org/10.1111/cogs.12496
Date published: 2024-11-24
Creator:
| name | value |
|---|---|
| @type | Person |
| givenName | Jana B. |
| familyName | Jarecki |
| affiliation | Max-Planck-Institute for Human Development, Berlin (Organization) |
The following JSON-LD can be found by search engines if you share this codebook publicly on the web.
{
"citation": "Jarecki, J. B., Meder, B., & Nelson, J. D. (2017). Naive and\n Robust: Class-Conditional Independence in Human Classification Learning.\n Cognitive Science, 42(1), 4-42. https://doi.org/10.1111/cogs.12496",
"identifier": "https://doi.org/10.1111/cogs.12496",
"creator": {
"@type": "Person",
"givenName": "Jana B.",
"familyName": "Jarecki",
"affiliation": {
"@type": "Organization",
"name": "Max-Planck-Institute for Human Development, Berlin"
}
},
"spatialCoverage": "Berlin, Germany",
"temporalCoverage": "September to December 2012",
"measurementTechnique": "Computerized cognitive learning task",
"funder": "MPI Berlin. Partial funding by grants NE 1713/1-2 to JDN, and ME 3717/2-2 to BM, from the Deutsche Forschungsgemeinschaft (DFG) as part of the priority program 'New Frameworks of Rationality' (SPP 1516).",
"keywords": ["behavioral data", "experimental data", "classification", "class-conditional independence", "learning", "naive Bayes", "Markov property", "Bayesian model", "probabilistic inference", "heuristics", "machine-learning", "human data"],
"name": "Category choices study 2",
"description": "Data on human higher-order cognition and learning in a categorization task. Participants classified images of plankton specimens with three features (the plankton's eye, claw, and tail) into one of two classes (species A and species B); that is, a visual supervised classification learning task. In total, 5 different plankton stimuli were shown repeatedly (for images, see Fig. 5 in the paper). On each trial a plankton was randomly drawn from a distribution (Table 2 in the paper); each plankton image corresponded to one feature combination and belonged to one class only probabilistically. Participants saw the plankton, classified it by pressing a key, and received feedback about its true class (shown as the letter 'A' or 'B', with a smile emoticon after a correct classification and a frown emoticon otherwise). Participants were instructed to consistently choose the most likely class. The next stimulus then appeared, and learning continued until participants had learned the correct class for each stimulus, defined by (a) having made at most four classification errors within the latest 200 trials (98% of 200 correct) and (b) having selected the most likely class on the last five appearances of each individual plankton stimulus within the random stimulus sequence. Learning was self-paced; the data contain reaction times. Note that which visual feature dimension (eye, claw, tail) in the images corresponded to which feature dimension in the classification, as well as the class labels, were randomized across participants. 
The data contain the de-randomized feature-class associations.\n\n\n## Table of variables\nThis table contains variable names, labels, and number of missing values.\nSee the complete codebook for more.\n\n|name |label | n_missing|\n|:------|:------|---------:|\n|i |Pseudonymized participant ID | 0|\n|t |Trial number per participant; the maximum is not identical across participants | 0|\n|e |The stimulus in the trial: features (numbers) and the associated true category (letter), which participants are instructed to learn | 0|\n|ch |Observed choice given the stimulus, A or B | 0|\n|smiley |Feedback about the correctness of the class choice | 0|\n|tdCh |time interval from this trial's start to the class choice, in deciseconds, i.e. tenths of seconds | 29|\n|tdNex |time interval from this trial's start to the key press to see the next stimulus, in deciseconds, i.e. tenths of seconds | 0|\n|tStart |absolute time of the start of the trial | 0|\n|tCh |time interval from this trial's start to the class choice in milliseconds | 0|\n|tNex |time interval from this trial's start to the key press to see the next stimulus in milliseconds | 0|\n|form |visual (randomized) form in which the features of the stimulus appeared | 0|\n\n### Note\nThis dataset was automatically described using the [codebook R package](https://rubenarslan.github.io/codebook/) (version 0.9.5).",
"datePublished": "2024-11-24",
"@context": "https://schema.org/",
"@type": "Dataset",
"variableMeasured": [
{
"name": "i",
"description": "Pseudonymized participant ID",
"@type": "propertyValue"
},
{
"name": "t",
"description": "Trial number per participant; the maximum is not identical across participants",
"@type": "propertyValue"
},
{
"name": "e",
"description": "The stimulus in the trial, features (numbers) and \n the associated true category (letter), which participants \n are instructed to learn",
"value": "1. 000A,\n2. 000B,\n3. 001A,\n4. 001B,\n5. 010A,\n6. 010B,\n7. 100A,\n8. 100B,\n9. 111A,\n10. 111B",
"@type": "propertyValue"
},
{
"name": "ch",
"description": "Observed choice given the stimulus, A or B",
"@type": "propertyValue"
},
{
"name": "smiley",
"description": "Feedback about the correctness of class choice",
"value": "s. smiley, feedback that the correct class was chosen,\nf. frowny, feedback that an incorrect class was chosen",
"maxValue": "s",
"minValue": "f",
"@type": "propertyValue"
},
{
"name": "tdCh",
"description": "time interval from this trial's start to the class choice \n in deciseconds, i.e. tenths of seconds",
"@type": "propertyValue"
},
{
"name": "tdNex",
"description": "time interval from this trial's start to the key press to see the next \n stimulus, in deciseconds, i.e. tenths of seconds",
"@type": "propertyValue"
},
{
"name": "tStart",
"description": "absolute time of the start of the trial",
"@type": "propertyValue"
},
{
"name": "tCh",
"description": "time interval from this trial's start to the class choice in milliseconds",
"@type": "propertyValue"
},
{
"name": "tNex",
"description": "time interval from this trial's start to the key press to see the next stimulus in milliseconds",
"@type": "propertyValue"
},
{
"name": "form",
"description": "visual (randomized) form in which the features of the stimulus appeared",
"value": "1. bfc,\n2. bfo,\n3. bmc,\n4. bmo,\n5. ffc,\n6. ffo,\n7. fmc,\n8. fmo",
"@type": "propertyValue"
}
]
}
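In study 2 the feature-to-class mapping is probabilistic, so the instructed response is the most likely class given the stimulus. Under the class-conditional independence assumption the paper examines (naive Bayes), that posterior can be sketched as follows. The prior and likelihood numbers below are hypothetical illustrations, not the values of Table 2 in the paper:

```python
def naive_bayes_class(features, prior, likelihoods):
    """Most likely class under class-conditional independence (naive
    Bayes).  prior: {class: P(class)}; likelihoods: {class: list of
    P(feature_i = 1 | class)}.  features: tuple of 0/1 values."""
    scores = {}
    for c, p in prior.items():
        score = p
        for f, theta in zip(features, likelihoods[c]):
            # multiply in P(f | class), feature by feature
            score *= theta if f == 1 else (1 - theta)
        scores[c] = score
    # unnormalized posteriors suffice for the argmax
    return max(scores, key=scores.get)
```

With, say, a uniform prior and features that are each more probable under species A, the stimulus `(1, 1, 1)` would be classified as A; the sketch makes no claim about the actual parameters used in the experiment.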